Machine Systems for Exploration and Manipulation: A Conceptual Framework and Method of Evaluation
A conceptual approach to describing and evaluating problem-solving by robotic systems is offered. One particular problem of importance to the field of robotics, disassembly, is considered. A general description is provided of an effector system equipped with sensors that interacts with objects for purposes of disassembly and that learns as a result. The system's approach is bottom up, in that it has no a priori knowledge about object categories. It does, however, have pre-existing methods and strategies for exploration and manipulation. The sensors assumed to be present are vision, proximity, tactile, position, force, and thermal. The system's capabilities are described with respect to two phases: object exploration and manipulation. Exploration takes the form of executing exploratory procedures, algorithms for determining the substance, structure, and mechanical properties of objects. Manipulation involves manipulatory operators, defined by the type of motion, nature of the end-effector configuration, and precise parameterization. The relation of the hypothesized system to existing implementations is described, and a means of evaluating it is also proposed
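The two phases described above suggest a simple data model. As a minimal sketch (all names and fields are illustrative choices, not taken from the paper), exploratory procedures and manipulatory operators might be represented as:

```python
from dataclasses import dataclass, field

# Hypothetical data model for the framework's two phases;
# names and fields are illustrative, not from the paper.

@dataclass
class ExploratoryProcedure:
    """An algorithm for determining an object property through exploration."""
    name: str                  # e.g. "pressure", "lateral_motion"
    target_property: str       # "substance", "structure", or "mechanical"
    sensors: list = field(default_factory=list)  # sensors the procedure uses

@dataclass
class ManipulatoryOperator:
    """A manipulation primitive: motion type, effector configuration, parameters."""
    motion_type: str           # e.g. "twist", "pull"
    effector_config: str       # end-effector configuration, e.g. "pinch_grip"
    parameters: dict = field(default_factory=dict)  # precise parameterization

# Example: probing hardness with tactile and force sensing,
# then unscrewing a part during disassembly
probe = ExploratoryProcedure("pressure", "mechanical", ["tactile", "force"])
unscrew = ManipulatoryOperator("twist", "pinch_grip", {"torque_nm": 0.5})
```

Separating the two phases this way mirrors the paper's distinction: exploration procedures report object properties, while manipulatory operators consume them as parameters.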
Learning efficient haptic shape exploration with a rigid tactile sensor array
Haptic exploration is a key skill for both robots and humans to discriminate
and handle unknown objects or to recognize familiar objects. Its active nature
is evident in humans who from early on reliably acquire sophisticated
sensory-motor capabilities for active exploratory touch and directed manual
exploration that associates surfaces and object properties with their spatial
locations. This is in stark contrast to robotics. In this field, the relative
lack of good real-world interaction models - along with very restricted sensors
and a scarcity of suitable training data to leverage machine learning methods -
has so far rendered haptic exploration a largely underdeveloped skill. In the
present work, we connect recent advances in recurrent models of visual
attention with previous insights about the organisation of human haptic search
behavior, exploratory procedures and haptic glances for a novel architecture
that learns a generative model of haptic exploration in a simulated
three-dimensional environment. The proposed algorithm simultaneously optimizes
main perception-action loop components: feature extraction, integration of
features over time, and the control strategy, while continuously acquiring data
online. We train a multi-module neural network comprising a feature extractor and a recurrent module that supports pose control by storing and combining sequential sensory data. The resulting haptic meta-controller, called the Haptic Attention Model, moves a rigid tactile sensor array through a physics-driven simulation environment, performing a sequence of haptic glances and outputting the corresponding force measurements. The method has been successfully tested with four different objects, achieving strong results while performing object contour exploration optimized for its own sensor morphology
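The perception-action loop outlined above (feature extraction, integration of features over time, and a control strategy choosing the next glance) can be sketched as a toy loop. Every function below is an illustrative placeholder for the learned modules, not the paper's implementation:

```python
import math

# Toy stand-ins for the three loop components named in the abstract:
# a feature extractor, a recurrent integrator, and a glance-control policy.
# All functions are illustrative placeholders, not the paper's networks.

def extract_features(force_reading):
    """Compress a raw force vector into a scalar feature (its magnitude)."""
    return math.sqrt(sum(f * f for f in force_reading))

def integrate(state, feature, decay=0.8):
    """Recurrently fold the newest feature into the running state."""
    return decay * state + (1.0 - decay) * feature

def next_glance_pose(state, step):
    """Pick the next sensor pose from the current state (placeholder policy)."""
    return (step * 0.1, state * 0.05)

def simulate_contact(pose):
    """Fake physics: return a force reading that depends on the pose."""
    x, y = pose
    return (math.sin(x), math.cos(y), 0.1)

state = 0.0
for step in range(5):                  # a fixed-length sequence of haptic glances
    pose = next_glance_pose(state, step)
    force = simulate_contact(pose)     # each glance yields one force measurement
    state = integrate(state, extract_features(force))
# `state` now summarizes the whole glance sequence
```

In the actual architecture all three components are trained jointly and online; the point of the sketch is only the data flow: pose in, force out, features accumulated recurrently.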
Perception of 3-D Location Based on Vision, Touch, and Extended Touch
Perception of the near environment gives rise to spatial images in working memory that continue to represent the spatial layout even after cessation of sensory input. As the observer moves, these spatial images are continuously updated. This research is concerned with (1) whether spatial images of targets are formed when they are sensed using extended touch (i.e., using a probe to extend the reach of the arm) and (2) the accuracy with which such targets are perceived. In Experiment 1, participants perceived the 3-D locations of individual targets from a fixed origin and were then tested with an updating task involving blindfolded walking followed by placement of the hand at the remembered target location. Twenty-four target locations, representing all combinations of two distances, two heights, and six azimuths, were perceived by vision or by blindfolded exploration with the bare hand, a 1-m probe, or a 2-m probe. Systematic errors in azimuth were observed for all targets, reflecting errors in representing the target locations and updating. Overall, updating after visual perception was best, but the quantitative differences between conditions were small. Experiment 2 demonstrated that auditory information signifying contact with the target was not a factor. Overall, the results indicate that 3-D spatial images can be formed of targets sensed by extended touch and that perception by extended touch, even out to 1.75 m, is surprisingly accurate
Effects of virtual acoustics on dynamic auditory distance perception
Sound propagation encompasses various acoustic phenomena including
reverberation. Current virtual acoustic methods, ranging from parametric
filters to physically-accurate solvers, can simulate reverberation with varying
degrees of fidelity. We investigate the effects of reverberant sounds generated
using different propagation algorithms on acoustic distance perception, i.e.,
how far away humans perceive a sound source to be. In particular, we evaluate two
classes of methods for real-time sound propagation in dynamic scenes based on
parametric filters and ray tracing. Our study shows that the more accurate
method produces less distance compression than the approximate,
filter-based method. This suggests that accurate reverberation in VR results in
a better reproduction of acoustic distances. We also quantify the levels of
distance compression introduced by different propagation methods in a virtual
environment
Combining Locations from Working Memory and Long-Term Memory into a Common Spatial Image
This research uses a novel integration paradigm to investigate whether target locations read in from long-term memory (LTM) differ from perceptually encoded inputs in spatial working memory (SWM) with respect to systematic spatial error and/or noise, and whether SWM can simultaneously encompass both of these sources. Our results provide evidence for a composite representation of space in SWM derived from both perception and LTM, albeit with a loss in spatial precision of locations retrieved from LTM. More generally, the data support the concept of a spatial image in working memory and extend its potential sources to representations retrieved from LTM